
    Artifact Lifecycle Discovery

    Artifact-centric modeling is a promising approach for modeling business processes based on so-called business artifacts: key entities that drive the company's operations and whose lifecycles define the overall business process. While artifact-centric modeling offers significant advantages, the overwhelming majority of existing process mining methods cannot be applied directly, as they are tailored to discovering monolithic process models. This paper addresses the problem by proposing a chain of methods for discovering artifact lifecycle models in the Guard-Stage-Milestone notation. We decompose the problem so that a wide range of existing (non-artifact-centric) process discovery and analysis methods can be reused flexibly. The methods presented in this paper are implemented as software plug-ins for ProM, a generic open-source framework and architecture for implementing process mining tools.
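
    As a rough illustration of the decomposition idea, the sketch below splits a flat event log into per-artifact sub-logs, one trace per artifact instance, so that conventional discovery methods can be reused per lifecycle; the event-dictionary layout and function name are assumptions, not the ProM plug-ins' API.

        from collections import defaultdict

        def split_log_by_artifact(events):
            # events: iterable of dicts with 'artifact_type', 'artifact_id',
            # 'activity' and 'timestamp' keys (an assumed flat log layout).
            sublogs = defaultdict(lambda: defaultdict(list))
            for e in sorted(events, key=lambda ev: ev["timestamp"]):
                sublogs[e["artifact_type"]][e["artifact_id"]].append(e["activity"])
            # One trace per artifact instance, grouped by artifact type; each
            # group can now be fed to any non-artifact-centric miner.
            return {atype: list(traces.values())
                    for atype, traces in sublogs.items()}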

    Formal Modelling of Goals in Organizations

    Each organization exists or is created to achieve one or more goals. To ensure continued success, the organization should monitor its performance with respect to the formulated goals. In practice, the performance of an organization is often evaluated by estimating its performance indicators. In most existing approaches to organization modelling, the relation between performance indicators and goals remains implicit. This paper proposes a formal framework for modelling goals based on performance indicators and defines mechanisms for establishing goal satisfaction, which enable evaluation of organizational performance. Methodological and analysis issues related to goals are also discussed. The described framework is part of a general framework for organization modelling and analysis.
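
    A minimal sketch of making the indicator-goal relation explicit, assuming goals are simple threshold conditions over named performance indicators; the paper's formal language is richer, and all names here are illustrative.

        def goal_satisfied(goal, pi_values):
            # goal: (pi_name, operator, target); pi_values: name -> value.
            value = pi_values[goal[0]]
            checks = {">=": value >= goal[2],
                      "<=": value <= goal[2],
                      "==": value == goal[2]}
            return checks[goal[1]]

        goals = [("on_time_delivery", ">=", 0.95),
                 ("avg_cost_per_order", "<=", 12.0)]
        pis = {"on_time_delivery": 0.97, "avg_cost_per_order": 13.5}
        # Evaluate organizational performance as satisfaction per goal.
        print([(g[0], goal_satisfied(g, pis)) for g in goals])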

    Constraint-based Modelling of Organisations

    Modern organisations are characterised by a great variety of forms and often involve many actors with diverse goals, performing a wide range of tasks under changing environmental conditions. Due to this high complexity, mistakes and inconsistencies are not rare in organisations. To provide better insight into organisational operation and to identify different types of organisational problems, an explicit specification of the relations and rules on which the structure and behaviour of an organisation are based is required. Before it is used, the specification of an organisation should be checked for internal consistency and validity with respect to the domain. To this end, the paper introduces a framework for the formal specification of constraints that ensure the correctness of organisational specifications. To verify the satisfaction of constraints, efficient and scalable algorithms have been developed and implemented. The application of the proposed approach is illustrated by a case study from the air traffic domain.
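
    A minimal sketch of checking an organisational specification against constraints, assuming the specification is a plain data structure and each constraint a predicate over it; the paper's formal constraint language and verification algorithms are more elaborate.

        spec = {
            "roles": {"controller", "planner"},
            "allocations": [("alice", "controller"), ("bob", "planner")],
        }

        constraints = [
            ("every allocated role is declared",
             lambda s: all(r in s["roles"] for _, r in s["allocations"])),
            ("every declared role has at least one actor",
             lambda s: s["roles"] == {r for _, r in s["allocations"]}),
        ]

        # Report the names of all violated constraints, if any.
        violations = [name for name, check in constraints if not check(spec)]
        print("consistent" if not violations else violations)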

    A specification language for organisational performance indicators

    A specification language for performance indicators, their relations, and requirements over them is presented and illustrated for a case study in logistics. The language can be used in different forms, ranging from informal, semi-formal, and graphical to formal. A software environment has been developed that supports the specification process and can automatically check whether performance indicators, relations between them, or certain requirements over them are satisfied in a given organisational process.
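
    A minimal sketch of automatically checking one kind of relation between performance indicators, here a crude "moves in the same direction" test over measured periods; the indicator values and the test itself are illustrative assumptions, not the language's actual constructs.

        def positively_related(series_a, series_b):
            # Crude check that indicator b never moves against indicator a
            # between consecutive measurement periods.
            steps = zip(zip(series_a, series_a[1:]),
                        zip(series_b, series_b[1:]))
            return all((a2 - a1) * (b2 - b1) >= 0
                       for (a1, a2), (b1, b2) in steps)

        # e.g. throughput vs. utilisation over three periods
        print(positively_related([10, 12, 15], [0.70, 0.75, 0.80]))  # True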

    Bankruptcy Prediction with Rough Sets

    The bankruptcy prediction problem can be considered an ordinal classification problem. The classical theory of Rough Sets describes objects by discrete attributes and does not take into account the ordering of the attribute values. This paper proposes a modification of the Rough Set approach applicable to monotone datasets. We introduce, respectively, the concepts of monotone discernibility matrix and monotone (object) reduct. Furthermore, we use the theory of monotone discrete functions, developed earlier by the first author, to represent and to compute decision rules. In particular, we use monotone extensions, decision lists, and dualization to compute classification rules that cover the whole input space. The theory is applied to the bankruptcy prediction problem.
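
    For intuition, the monotonicity assumption behind the approach can be stated directly: an object that dominates another on all ordered attributes may not receive a lower label. A minimal sketch, with an illustrative data layout:

        def dominates(x, y):
            # x dominates y if x is at least as good on every ordered attribute.
            return all(a >= b for a, b in zip(x, y))

        def monotone(dataset):
            # dataset: list of (attribute_vector, ordinal_label) pairs.
            return all(lx >= ly
                       for x, lx in dataset
                       for y, ly in dataset if dominates(x, y))

        data = [((1, 2), 0), ((2, 2), 1), ((3, 3), 1)]
        print(monotone(data))  # True: no dominating object has a lower label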

    Monotone Decision Trees and Noisy Data

    The decision tree algorithm for monotone classification presented in [4, 10] requires strictly monotone data sets. This paper addresses the problem of noise due to violation of the monotonicity constraints and proposes a modification of the algorithm to handle noisy data. It also presents methods for controlling the size of the resulting trees while preserving the monotonicity property, whether the data set is monotone or not.
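
    A minimal sketch of the kind of check such an algorithm needs, listing the label pairs that violate monotonicity (the noise) so they can be repaired or down-weighted before tree construction; this is illustrative, not the paper's exact procedure.

        def violations(dataset):
            # Pairs where x dominates y on all attributes yet has a lower label.
            return [(x, y) for x, lx in dataset for y, ly in dataset
                    if all(a >= b for a, b in zip(x, y)) and lx < ly]

        # (2, 2) dominates (1, 1) but carries a lower label: noise.
        noisy = [((1, 1), 1), ((2, 2), 0)]
        print(violations(noisy))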

    Understanding Performance Measurement and Control in Third Party Logistics

    Each individual planning process starts with clear objectives. These objectives specify the goals of the planning process, which is to assign tasks to resources at certain points in time. Planning systems, whether manual or computerized, should utilize those objectives in their planning. To gain insight into the specific planning objectives of the Logistical Service Provider (LSP) domain, we study the Key Performance Indicators (KPIs) related to this field. Typically, KPIs are used in an ex-post context: to evaluate the past performance of a company. We reason that KPIs could be utilized differently as well: if one knows what counts afterwards, it is logical to anticipate this in the planning phase. This paper therefore focuses on the performance parameters and objectives that play a role in the logistical planning process, in order to gain insight into the factors that should be considered when designing a new software system for the planning and control of LSP operations. We present an extensive literature survey and introduce a framework that captures the dynamics of competing KPIs while positioning them in the practical context of an LSP. We conclude with a first validation of the framework and a roadmap for future work, with the design of a software agent-based planning system for (road) logistics as an intended final result.
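
    As a rough illustration of using KPIs proactively rather than only ex post, the sketch below scores candidate plans by a weighted combination of competing KPIs; the KPI names, weights, and scoring rule are assumptions, not the paper's framework.

        def plan_score(plan_kpis, weights):
            # Higher is better; cost is negated so all KPIs are maximized.
            return sum(weights[k] * v for k, v in plan_kpis.items())

        candidates = {
            "plan_a": {"on_time": 0.95, "utilisation": 0.70, "cost": -1200},
            "plan_b": {"on_time": 0.88, "utilisation": 0.85, "cost": -1050},
        }
        weights = {"on_time": 100, "utilisation": 50, "cost": 0.01}
        best = max(candidates, key=lambda p: plan_score(candidates[p], weights))
        print(best)  # the plan that best balances the competing KPIs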

    Induction of Ordinal Decision Trees

    This paper focuses on the problem of building monotone decision trees from the point of view of the multicriteria decision aid (MCDA) methodology. By taking into account the preferences of the decision maker, an attempt is made to bring closer related research within machine learning and MCDA. The paper addresses the question of how to label the leaves of a tree in a way that guarantees the monotonicity of the resulting tree. Two approaches, dynamic and static labeling, are proposed for this purpose and compared experimentally. The paper further considers the problem of splitting criteria in the context of monotone decision trees. Two criteria from the literature are compared experimentally, the entropy criterion and the number-of-conflicts criterion, in an attempt to find out which one better fits the specifics of monotone problems and which one better handles monotonicity noise.
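
    A minimal one-dimensional sketch of the labelling idea: once the tree partitions the input space, leaf labels are adjusted so that a leaf whose region dominates another never gets a lower label. This is illustrative of the static-labelling flavour, not the paper's exact algorithms.

        def monotone_labels(leaves):
            # leaves: list of (lower_bound, majority_label), sorted by bound.
            labels, current = [], float("-inf")
            for _, lab in leaves:
                current = max(current, lab)  # never allow a label to drop
                labels.append(current)
            return labels

        # The middle leaf's majority label 0 would break monotonicity;
        # it is raised to match the leaf it dominates.
        print(monotone_labels([(0, 1), (2, 0), (5, 2)]))  # [1, 1, 2]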

    Knowledge Discovery and Monotonicity

    The monotonicity property is ubiquitous in our lives and appears in different roles: as domain knowledge, as a requirement, as a property that reduces the complexity of a problem, and so on. It is present in various domains: economics, mathematics, languages, operations research, and many others. This thesis focuses on the monotonicity property in knowledge discovery, specifically in classification, attribute reduction, function decomposition, frequent pattern generation, and missing value handling. Monotonicity can be treated as available background information that can facilitate and guide the knowledge extraction process; while in some sub-areas methods have already been developed for taking this additional information into account, in most methodologies it has not been extensively studied or has not been addressed at all. This thesis is a contribution to a change in that direction. Four specific problems are addressed within four different methodologies: rough set theory, monotone decision trees, function decomposition, and frequent pattern generation. In the first three parts, monotonicity is domain knowledge and a requirement on the outcome of the classification process; the three methodologies are extended to deal with monotone data so that the outcome is guaranteed to satisfy the monotonicity requirement. In the last part, monotonicity is a property that helps reduce the computation involved in frequent pattern generation; here the focus is on two of the best algorithms and their comparison, both theoretically and experimentally.

    About the author: Viara Popova was born in Bourgas, Bulgaria, in 1972. She received her secondary education at the Mathematics High School "Nikola Obreshkov" in Bourgas. In 1996 she completed her higher education at Sofia University, Faculty of Mathematics and Informatics, where she graduated with a major in Informatics and a specialization in Information Technologies in Education. She then joined the Department of Information Technologies, first as an associate member and from 1997 as an assistant professor. In 1999 she became a PhD student at Erasmus University Rotterdam, Faculty of Economics, Department of Computer Science. In 2004 she joined the Artificial Intelligence Group within the Department of Computer Science, Faculty of Sciences, at Vrije Universiteit Amsterdam as a postdoc researcher.
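
    As a concrete example of monotonicity reducing computation in frequent pattern generation, the sketch below uses the classical anti-monotonicity (Apriori) principle, pruning any candidate with an infrequent subset before counting its support; it is illustrative and not one of the two algorithms compared in the thesis.

        from itertools import combinations

        def frequent_itemsets(transactions, min_support):
            items = {frozenset([i]) for t in transactions for i in t}
            level = {s for s in items
                     if sum(s <= t for t in transactions) >= min_support}
            result = set(level)
            while level:
                candidates = {a | b for a in level for b in level
                              if len(a | b) == len(a) + 1}
                # Anti-monotonicity: a superset of an infrequent set cannot be
                # frequent, so prune such candidates without counting them.
                candidates = {c for c in candidates
                              if all(frozenset(s) in level
                                     for s in combinations(c, len(c) - 1))}
                level = {c for c in candidates
                         if sum(c <= t for t in transactions) >= min_support}
                result |= level
            return result

        tx = [frozenset("abc"), frozenset("abd"), frozenset("ab"), frozenset("cd")]
        print(sorted(map(sorted, frequent_itemsets(tx, 2))))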